30 research outputs found

    Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution, appendix 2

    This thesis reviews a technique for clearing channels in the Power Spectral Estimate by applying linear combinations of well-known window functions to the autocorrelation function. Windowing the autocorrelation function is necessary because the true autocorrelation is not generally available when forming the Power Spectral Estimate; the windows reduce the effect that truncating the data, and possibly the autocorrelation, has on the Power Spectral Estimate. Previous work showed that a single channel could be cleared, allowing detection of a small peak in the presence of a large peak in the Power Spectral Estimate. The utility of the method depends on its robustness across different input situations. In this paper we extend the analysis to clearing up to three channels. We examine the relative positions of the spikes and the effect of using different percentages of the autocorrelation lags in the Power Spectral Estimate. The method could apply wherever the power spectrum is used; one example is beamforming for source location, where a small target must be located next to a large target. Other possibilities extend into seismic data processing. As the method becomes more automated, other applications may present themselves.
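The role of the lag window can be sketched with a generic Blackman-Tukey-style estimate. This is not the thesis's channel-clearing combination of windows; the single Hann lag window, the lag count, and the two-sinusoid test signal (a large peak with a small one nearby) are our own illustrative choices.

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, window):
    """Blackman-Tukey PSD estimate: window the autocorrelation
    estimate, then Fourier transform the windowed lags."""
    n = len(x)
    x = x - np.mean(x)
    # Biased autocorrelation estimate for lags 0..max_lag-1.
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])
    # Symmetric lag sequence multiplied by the symmetric lag window.
    r_sym = np.concatenate([r[::-1], r[1:]])
    w_sym = np.concatenate([window[::-1], window[1:]])
    return np.abs(np.fft.rfft(r_sym * w_sym))

# A large sinusoidal peak with a much smaller one close in frequency.
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.20 * t) + 0.05 * np.sin(2 * np.pi * 0.25 * t)

lags = 128
hann = np.hanning(2 * lags - 1)[lags - 1:]   # one-sided Hann lag window
psd = blackman_tukey_psd(x, lags, hann)
```

The windowed estimate's side lobes around the strong peak are low enough that the weak component is not buried, which is the situation the channel-clearing work addresses.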

    Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution, appendix 4

    The power spectrum of a stationary random process can be defined through the Wiener-Khintchine Theorem, which states that the power spectrum and the autocorrelation function are a Fourier transform pair. To implement this theorem for signals that are discrete and of finite length, we can use the Blackman-Tukey method. Blackman and Tukey (1958) show that a function w(tau), called a lag window, can be applied to the autocorrelation estimates to obtain power spectrum estimates that are statistically stable. The Fourier transform of w(tau) is called a spectral window. Typical choices for spectral windows show a distinct trade-off between main lobe width and side lobe strength. Smith (1985) introduced the idea of designing windows by taking linear combinations of the standard windows to produce hybrid windows. We implement Smith's idea to obtain spectral windows with narrow main lobes and near side lobes smaller than those of typical windows. One of the main contributions of this thesis is to show that Smith's problem is equivalent to a Quadratic Programming (QP) problem with linear equality and inequality constraints. A computer program was written to produce hybrid windows by setting up and solving the QP problem. We also developed and solved two variations of the original problem. In both variations the inequality constraints were changed from non-negativity of the combination coefficients to non-negativity of the hybrid lag window itself; in the second variation, the window functions used to construct the hybrid window were also changed to a frequency-variable set of truncated cosinusoids. A series of tests was run with the three computer programs to investigate the behavior of the hybrid spectral and lag windows. Emphasis was put on obtaining spectral windows with both relatively narrow main lobes and the lowest near side lobes possible for these algorithms, and some success was achieved: a 10 dB peak side lobe reduction over the rectangular spectral window was obtained without significant main lobe broadening, and average side lobe levels of -117 dB were reached at the cost of doubling the main lobe width (at the -3 dB point).
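The linear-combination idea can be illustrated as follows. The fixed 0.4/0.6 coefficients and the rectangular/Hann pair are arbitrary choices for illustration only; the thesis obtains the coefficients by solving the QP problem.

```python
import numpy as np

def peak_sidelobe_db(lag_window, nfft=4096):
    """Peak side lobe level (dB relative to the main lobe) of a lag
    window's spectral window, evaluated on a dense FFT grid."""
    w = np.abs(np.fft.fft(lag_window, nfft))
    w = w / w.max()
    # Walk down the main lobe to its first local minimum, then take
    # the largest remaining value up to the Nyquist bin.
    i = 1
    while i < nfft // 2 and w[i + 1] < w[i]:
        i += 1
    return 20 * np.log10(w[i:nfft // 2].max())

n = 129
rect = np.ones(n)
hann = np.hanning(n)
# Hybrid lag window: a convex combination of two standard windows.
# These coefficients are arbitrary; the thesis chooses them by QP.
hybrid = 0.4 * rect + 0.6 * hann
```

The hybrid window's peak side lobe falls between the rectangular window's (about -13 dB) and the Hann window's (about -32 dB), while its main lobe is narrower than the Hann window's, which is the trade-off the hybrid design exploits.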

    Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. Appendix 7

    Because of the interesting science which can be performed using a satellite attached by a very long tether to a mother vehicle in orbit, such as the Space Shuttle, NASA will deploy TSS-1 (Tethered Satellite System) in 1992. A very long tether (20 km in this case) can undergo oscillations of several different types, or modes, and higher harmonics of these modes. The purpose of this document is to describe a method for detecting the amplitude, frequency, and phase of these modes (and predicting future motion in the steady state), in particular the skiprope mode, using tethered satellite dynamics measurements. Specifically, the rotation rate data about two orthogonal axes, calculated from the output of the satellite gyroscopes, are used; the data of interest are the satellite pitch and roll rate measurements. NASA has decided to use two methods to diagnose skiprope properties and predict future values. One of these, a Fourier transform domain approach, is the subject of this notebook. The main program and all subroutines are described, along with the test plan for evaluating the Frequency Domain Skiprope Observer.
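A minimal frequency-domain estimate of one mode's amplitude, frequency, and phase from a rate record might look like the sketch below. This is not the Frequency Domain Skiprope Observer itself; the 0.05 Hz "pitch-rate" record, sampling interval, and noise level are entirely synthetic.

```python
import numpy as np

def dominant_mode(rate, dt):
    """Estimate amplitude, frequency (Hz), and phase of the dominant
    oscillation in a uniformly sampled rate signal via the DFT."""
    n = len(rate)
    spec = np.fft.rfft(rate - np.mean(rate))
    k = np.argmax(np.abs(spec[1:])) + 1      # skip the DC bin
    amp = 2 * np.abs(spec[k]) / n            # single-sided amplitude
    freq = k / (n * dt)
    phase = np.angle(spec[k])                # phase of cos(2*pi*f*t + phase)
    return amp, freq, phase

# Synthetic pitch-rate record: one 0.05 Hz mode plus measurement noise.
dt = 0.5
t = np.arange(4000) * dt
rng = np.random.default_rng(0)
rate = 1.5 * np.cos(2 * np.pi * 0.05 * t + 0.3) + 0.1 * rng.normal(size=t.size)

amp, freq, phase = dominant_mode(rate, dt)
```

Knowing amplitude, frequency, and phase of the mode is what allows future motion to be predicted in the steady state, as the abstract describes.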

    Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate deconvolution method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots of error versus filter length are produced, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
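The inverse-filter construction described above can be sketched as follows. The Gaussian width, grid sizes, and boxcar test input are our own choices, and no filter-length truncation study is attempted here; with noiseless data the unregularized inverse filter recovers the input almost exactly.

```python
import numpy as np

def inverse_filter(response, nfft):
    """Inverse filter: inverse DFT of the reciprocal of the DFT of
    the system response (no regularization, as for non-noisy data)."""
    h = np.fft.fft(response, nfft)
    return np.real(np.fft.ifft(1.0 / h))

# Narrow Gaussian response function, normalized to unit area.
x = np.arange(-16, 17)
g = np.exp(-0.5 * (x / 2.0) ** 2)
g = g / g.sum()

nfft = 256
inv = inverse_filter(g, nfft)

# Simple peak-type input, blurred by circular convolution with g.
signal = np.zeros(nfft)
signal[100:110] = 1.0
data = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(g, nfft)))

# Deconvolve by circular convolution with the inverse filter; this
# also undoes the shift introduced by placing g at the origin.
recovered = np.real(np.fft.ifft(np.fft.fft(data) * np.fft.fft(inv, nfft)))
```

With noise present the reciprocal amplifies high-frequency noise strongly, which is why the study smooths with Morrison's method before applying the filter and truncates the filter to various lengths.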

    Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. Chief among the research results is a methodology developed to determine design and operation parameters for error minimization when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values whose projection from the curve onto the surface gives the smallest error are the optimum values. These values are constrained to the curve, and so will not necessarily correspond to an absolute minimum of the error surface.
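The constrained-minimization procedure can be sketched numerically as below. Both the error model and the instrument constraint here are hypothetical stand-ins, chosen only to illustrate projecting the constraint curve onto the error surface and taking the smallest error along it.

```python
import numpy as np

# Hypothetical error surface over SNR and one design parameter
# (not the report's actual error model).
snr = np.linspace(10.0, 150.0, 141)
param = np.linspace(0.5, 5.0, 451)
S, P = np.meshgrid(snr, param, indexing="ij")
error_surface = 1.0 / S + (P - 2.0) ** 2

# Hypothetical instrumental constraint tying the parameter to SNR.
constraint_param = 1.0 + snr / 60.0

# Project the constraint curve onto the surface and take the
# smallest error along the projection.
idx = np.array([np.argmin(np.abs(param - p)) for p in constraint_param])
error_on_curve = error_surface[np.arange(len(snr)), idx]
best = np.argmin(error_on_curve)
best_snr, best_param = snr[best], constraint_param[best]
```

The optimum found this way lies on the constraint curve, not at the surface's unconstrained minimum, mirroring the report's remark that instrumental characteristics restrict the achievable error.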

    PHYS 6206


    Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution, appendix 3

    The Always-Convergent Iterative Noise Removal and Deconvolution Method of Ioup is applied as a single filter in the transform domain to deconvolution with both narrow and wide Gaussian impulse response functions. The wraparound error for both cases is also studied. A method is developed by which one can find the optimum iteration number for single-filter iterative deconvolution of sampled data. The method employs the mean square error (MSE), the square of the difference between the deconvolved result and the input, for optimization. The MSE decreases as the deconvolution iterations proceed, but at the optimum iteration number the MSE starts to increase. This procedure is repeated for signal-to-noise ratios from 10 to 150, and the optimum iteration number and the MSE are plotted versus SNR. By knowing the SNR for a particular experiment, one can find the optimum iteration number and the corresponding MSE.
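The optimum-iteration search can be sketched generically as below. A plain Van Cittert update is substituted for the always-convergent method, which is not reproduced here, and the Gaussian widths and noise level are illustrative; the point is only that the MSE against the known input falls, reaches a minimum at the optimum iteration number, and then rises as noise is amplified.

```python
import numpy as np

def mse_per_iteration(data, response, truth, max_iter=200):
    """Iterative deconvolution (plain Van Cittert update, used here
    as a stand-in), tracking mean square error against the known
    input at each iteration."""
    est = data.copy()
    mses = []
    for _ in range(max_iter):
        est = est + (data - np.convolve(est, response, mode="same"))
        mses.append(np.mean((est - truth) ** 2))
    return np.array(mses)

rng = np.random.default_rng(3)
t = np.arange(256)
truth = np.exp(-0.5 * ((t - 128) / 3.0) ** 2)      # peak-type input
h = np.exp(-0.5 * (np.arange(-16, 17) / 2.0) ** 2)
h = h / h.sum()                                    # narrow Gaussian response
data = np.convolve(truth, h, mode="same") + rng.normal(scale=0.02, size=t.size)

mses = mse_per_iteration(data, h, truth)
k_opt = int(np.argmin(mses)) + 1                   # optimum iteration number
```

Repeating this over a range of SNRs and recording k_opt gives the optimum-iteration-versus-SNR curve the thesis plots.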

    Known source detection predictions for higher order correlators

    The problem addressed in this paper is whether higher order correlation detectors can perform better in white noise than the cross correlation detector for the detection of a known transient source signal, if additional receiver information is included in the higher order correlations. While the cross correlation is the optimal linear detector for white noise, additional receiver information in the higher order correlations makes them nonlinear. In this paper, formulas that predict the performance of higher order correlation detectors of energy signals are derived for a known source signal. Given the first through fourth order signal moments and the noise variance, the formulas predict the SNR for which the detectors achieve a probability of detection of 0.5 for any level of false alarm, when noise at each receiver is independent and identically distributed. Results show that the performances of the cross correlation, bicorrelation, and tricorrelation detectors are proportional to the second, fourth, and sixth roots of the sampling interval, respectively, but do not depend on the observation time. Also, the SNR gains of the higher order correlation detectors relative to the cross correlation detector improve with decreasing probability of false alarm. The source signal may be repeated in higher order correlations, and gain formulas are derived for these cases as well. Computer simulations with several test signals are compared to the performance predictions of the formulas. The breakdown of the assumptions for signals with too few sample points is discussed, as are limitations on the design of signals for improved higher order gain. Results indicate that in white noise it is difficult for the higher order correlation detectors in a straightforward application to achieve better performance than the cross correlation. © 1998 Acoustical Society of America
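A zero-lag sketch of the three detector statistics, in the paper's setting of aligned receivers with independent white noise, is given below. The Gaussian pulse and its amplitude are our own test choices, and no lag search, normalization, or threshold design is attempted.

```python
import numpy as np

rng = np.random.default_rng(1)

def detector_statistics(r):
    """Zero-lag (central ordinate) correlation statistics: the cross
    correlation uses two receivers, the bicorrelation three, and the
    tricorrelation four."""
    cc = np.sum(r[0] * r[1])
    bc = np.sum(r[0] * r[1] * r[2])
    tc = np.sum(r[0] * r[1] * r[2] * r[3])
    return cc, bc, tc

n = 512
t = np.arange(n)
s = np.exp(-0.5 * ((t - 256) / 20.0) ** 2)   # known pulse-like transient

def receivers(amplitude):
    """Four aligned receiver records: scaled signal in unit white noise."""
    return [amplitude * s + rng.normal(size=n) for _ in range(4)]

cc_sig, bc_sig, tc_sig = detector_statistics(receivers(3.0))
cc_noise, bc_noise, tc_noise = detector_statistics(receivers(0.0))
```

Because the higher order statistics multiply more receiver records together, their signal terms depend on the third and fourth order signal moments, which is why those moments enter the paper's performance-prediction formulas.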


    Prediction of signal‐to‐noise ratio gain for passive higher‐order correlation detection of energy transients

    In general, higher-order correlation detectors perform well in passive detection for signals of high third- and fourth-order moments. Previous studies by the authors have shown that the normalized third- and fourth-order signal moments are reliable indicators of higher-order correlation detector performance [Pflug et al. (1992b)]. For a deterministic energy transient of known moments through fourth order, it is possible to predict theoretically the amount of gain over an ordinary cross-correlation detector for a bicorrelation or tricorrelation detector applied in a noise environment of known variance. In this paper, formulas that predict detector performance for passive detection at the minimum detectable level are derived. The noise is assumed to be stationary and zero mean with Gaussian correlation central ordinate probability density functions. To test the formulas, SNR detection and gain curves are generated using hypothesis testing and Monte Carlo simulations on a set of test signals. The test signals are created by varying the time width of a pulse-like signal in a sampling window of fixed time duration, resulting in a set of test signals with varying signal moments. Good agreement is found between the simulated and theoretical results. The effects of observation time (length of detection window) and sampling interval on detector performance are also discussed and illustrated with computer simulations. The prediction formulas indicate that decreasing the observation time or the sampling interval (assuming the signal is sufficiently sampled and the detection window contains the entire signal) improves detection performance. However, the rate of improvement is different for the three detectors. The SNR required to achieve the minimum detectable level of detection performance at a given probability of false alarm (Pfa) decreases with the fourth root of the observation time and sampling interval for the cross-correlation detector, the sixth root for the bicorrelation detector, and the eighth root for the tricorrelation detector. Relative detector performance also varies with Pfa. The probability of detection (Pd) for higher-order detectors degrades less rapidly with decreasing Pfa than the Pd for ordinary correlations. Thus higher-order correlators can be especially appropriate when a very low Pfa is required.
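The Monte Carlo hypothesis-testing procedure mentioned above can be sketched for the simplest case, a zero-lag cross-correlation detector with an empirically set threshold. The pulse shape, noise level, trial count, and Pfa are our own illustrative choices, not the paper's test set.

```python
import numpy as np

rng = np.random.default_rng(7)

def monte_carlo_pd(signal, noise_std, pfa, trials=2000):
    """Estimate probability of detection for a zero-lag cross
    correlation detector, with the threshold set empirically from
    noise-only trials to give the requested false alarm rate."""
    n = len(signal)
    noise_stats = np.array([
        np.dot(rng.normal(scale=noise_std, size=n),
               rng.normal(scale=noise_std, size=n)) for _ in range(trials)])
    threshold = np.quantile(noise_stats, 1.0 - pfa)
    sig_stats = np.array([
        np.dot(signal + rng.normal(scale=noise_std, size=n),
               signal + rng.normal(scale=noise_std, size=n)) for _ in range(trials)])
    return np.mean(sig_stats > threshold)

t = np.arange(256)
s = np.exp(-0.5 * ((t - 128) / 12.0) ** 2)   # pulse-like test signal

pd_strong = monte_carlo_pd(1.0 * s, 0.5, pfa=0.01)
pd_weak = monte_carlo_pd(0.1 * s, 0.5, pfa=0.01)
```

Sweeping the signal amplitude until Pd reaches the minimum detectable level at each Pfa is what produces the SNR detection and gain curves the paper compares against its prediction formulas.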